In mathematical statistics, the Fisher information (sometimes simply called information〔Lehmann & Casella, p. 115〕) is a way of measuring the amount of information that an observable random variable ''X'' carries about an unknown parameter ''θ'' of a distribution that models ''X''. Formally, it is the variance of the score, or the expected value of the observed information. In Bayesian statistics, the asymptotic distribution of the posterior mode depends on the Fisher information and not on the prior (according to the Bernstein–von Mises theorem, which was anticipated by Laplace for exponential families).〔Lucien Le Cam (1986), ''Asymptotic Methods in Statistical Decision Theory'', pp. 336 and 618–621 (von Mises and Bernstein).〕 The role of the Fisher information in the asymptotic theory of maximum-likelihood estimation was emphasized by the statistician Ronald Fisher (following some initial results by Francis Ysidro Edgeworth). The Fisher information is also used in the calculation of the Jeffreys prior, which is used in Bayesian statistics.

The Fisher information matrix is used to calculate the covariance matrices associated with maximum-likelihood estimates. It can also be used in the formulation of test statistics, such as the Wald test.

Statistical systems of a scientific nature (physical, biological, etc.) whose likelihood functions obey shift invariance have been shown to obey maximum Fisher information.〔Frieden & Gatenby (2013)〕 The level of the maximum depends upon the nature of the system constraints.

==Definition==

The Fisher information is a way of measuring the amount of information that an observable random variable ''X'' carries about an unknown parameter ''θ'' upon which the probability of ''X'' depends. The probability function for ''X'', which is also the likelihood function for ''θ'', is a function ''f''(''X''; ''θ''); it is the probability mass (or probability density) of the random variable ''X'', conditional on the value of ''θ''.
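The dual reading of ''f''(''x''; ''θ'') can be illustrated with a small sketch (a hypothetical Bernoulli example, not from the article): the same function is a probability mass function in ''x'' when ''θ'' is fixed, and a likelihood function of ''θ'' when ''x'' is an observed outcome.

```python
# f(x; theta) for a Bernoulli random variable (hypothetical example):
# theta^x * (1 - theta)^(1 - x) for x in {0, 1}.
def f(x, theta):
    return theta**x * (1 - theta)**(1 - x)

# Read as a pmf in x, with theta fixed at 0.3: the probabilities sum to 1.
total = f(0, 0.3) + f(1, 0.3)

# Read as a likelihood in theta, with x = 1 observed:
# values of theta near 1 make the observation more likely.
lik_low, lik_high = f(1, 0.2), f(1, 0.9)
```

The point of the sketch is only that no new object is introduced when passing from "probability of ''X''" to "likelihood of ''θ''"; the two are the same function read along different arguments.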
The partial derivative with respect to ''θ'' of the natural logarithm of the likelihood function is called the score. Under certain regularity conditions,〔Lectures on statistical inference.〕 it can be shown that the first moment of the score (that is, its expected value) is 0:

:<math>\operatorname{E}\left[\left.\frac{\partial}{\partial\theta} \log f(X;\theta)\,\right|\,\theta\right] = 0.</math>

The second moment is called the Fisher information:

:<math>\mathcal{I}(\theta) = \operatorname{E}\left[\left.\left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2\,\right|\,\theta\right],</math>

where, for any given value of ''θ'', the expression E[· | ''θ''] denotes the conditional expectation over values for ''X'' with respect to the probability function ''f''(''x''; ''θ'') given ''θ''. Note that <math>0 \leq \mathcal{I}(\theta)</math>. A random variable carrying high Fisher information implies that the absolute value of the score is often high. The Fisher information is not a function of a particular observation, as the random variable ''X'' has been averaged out. Since the expectation of the score is zero, the Fisher information is also the variance of the score.

If log ''f''(''x''; ''θ'') is twice differentiable with respect to ''θ'', and under certain regularity conditions, then the Fisher information may also be written as〔Lehmann & Casella, eq. (2.5.16).〕

:<math>\mathcal{I}(\theta) = -\operatorname{E}\left[\left.\frac{\partial^2}{\partial\theta^2} \log f(X;\theta)\,\right|\,\theta\right],</math>

since

:<math>\frac{\partial^2}{\partial\theta^2} \log f(X;\theta) = \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X;\theta)} - \left(\frac{\frac{\partial}{\partial\theta} f(X;\theta)}{f(X;\theta)}\right)^2 = \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X;\theta)} - \left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2</math>

and

:<math>\operatorname{E}\left[\left.\frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X;\theta)}\,\right|\,\theta\right] = \frac{\partial^2}{\partial\theta^2} \int f(x;\theta)\,dx = 0.</math>

Thus, the Fisher information is the negative of the expectation of the second derivative with respect to ''θ'' of the natural logarithm of ''f''. Information may thus be seen as a measure of the "curvature" of the support curve near the maximum likelihood estimate of ''θ''. A "blunt" support curve (one with a shallow maximum) would have a low negative expected second derivative, and thus low information; a sharp one would have a high negative expected second derivative, and thus high information.

Information is additive, in that the information yielded by two independent experiments is the sum of the information from each experiment separately:

:<math>\mathcal{I}_{X,Y}(\theta) = \mathcal{I}_X(\theta) + \mathcal{I}_Y(\theta).</math>

This result follows from the elementary fact that if random variables are independent, the variance of their sum is the sum of their variances.
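As a concrete numerical check (a hypothetical Bernoulli(''θ'') example, not from the article), the two expressions for the Fisher information above, the variance of the score and the negative expected second derivative, can be computed directly and compared with the closed form 1/(''θ''(1 − ''θ'')):

```python
from math import isclose

theta = 0.3  # assumed parameter value for the sketch

# Score U(x) = d/dtheta log f(x; theta) for a Bernoulli variable,
# where log f(x; theta) = x log(theta) + (1 - x) log(1 - theta).
def score(x, th):
    return x / th - (1 - x) / (1 - th)

# Second derivative of the log-likelihood with respect to theta.
def d2loglik(x, th):
    return -x / th**2 - (1 - x) / (1 - th)**2

# Expectations over x in {0, 1} with probabilities (1 - theta, theta).
mean_score = (1 - theta) * score(0, theta) + theta * score(1, theta)

# Fisher information as the second moment of the score
# (equal to its variance, since the mean score is 0).
info_var = (1 - theta) * score(0, theta)**2 + theta * score(1, theta)**2

# Fisher information as minus the expected second derivative (the "curvature" form).
info_curv = -((1 - theta) * d2loglik(0, theta) + theta * d2loglik(1, theta))

closed_form = 1 / (theta * (1 - theta))
```

Both `info_var` and `info_curv` agree with `closed_form`, and `mean_score` is 0, matching the identities stated above.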
In particular, the information in a random sample of size ''n'' is ''n'' times that in a sample of size 1, when observations are independent and identically distributed.

The information provided by a sufficient statistic is the same as that of the sample ''X''. This may be seen by using Neyman's factorization criterion for a sufficient statistic. If ''T''(''X'') is sufficient for ''θ'', then

:<math>f(X;\theta) = g(T(X), \theta)\, h(X)</math>

for some functions ''g'' and ''h''. See sufficient statistic for a more detailed explanation. The equality of information then follows from the following fact:

:<math>\frac{\partial}{\partial\theta} \log f(X;\theta) = \frac{\partial}{\partial\theta} \log g(T(X); \theta),</math>

which follows from the definition of Fisher information, and the independence of ''h''(''X'') from ''θ''. More generally, if ''T'' = ''t''(''X'') is a statistic, then

:<math>\mathcal{I}_T(\theta) \leq \mathcal{I}_X(\theta),</math>

with equality if and only if ''T'' is a sufficient statistic.